Transition from Artificial Narrow to Artificial General Intelligence Governance
- Posted by JGlenn
- On 12 April 2021
Phase 1 of the AGI study collected the views of 55 AGI leaders in the US, China, the UK, the European Union, Canada, and Russia on the 22 questions below (the list of leaders follows the questions). Phase 1 research was financially supported by the Dubai Future Foundation and the Future of Life Institute.
Order the Phase 1 report here (original English, or Spanish or Chinese translation)
Phase 2 was a Real-Time Delphi study that assessed 40 potential regulations for developers, governments, a UN multi-stakeholder hybrid (human-AI) organization, and users for trusted global and national governance of AGI. The Real-Time Delphi is now closed, and the report is being prepared.
Phase 3 is developing five alternative scenarios on AGI governance out to 2035, illustrating a range of possible futures from failures to successes.
Phase 1 Questions:
Origin or Self-Emergence
- How do you envision the possible trajectories ahead, from today’s AI, to much more capable AGI in the future?
- What are the most serious potential outcomes if these trajectories are not governed, or are governed badly?
- What are some key initial conditions for AGI so that an artificial superintelligence that is not to humanity’s liking does not emerge later?
Value Alignment, Morality, and Values
- Drawing on the work of the Global Partnership on Artificial Intelligence (GPAI) and others that have already identified norms, principles, and values, what additional or unique values should be considered for AGI?
- If a hierarchy of values becomes necessary for international treaties and a governance system, what should be the top priorities?
- How can alignment be achieved? If you think it is not possible, then what is the best way to manage this situation?
Governance and Regulations
- How can the international cooperation necessary to build international agreements and a global governance system be managed while nations and corporations are engaged in an intellectual “arms race” for global leadership?
- What options or models are there for global governance of AGI?
- What risks arise from attempts to govern the emergence of AGI? (Might some measures be counterproductive?)
- Should future AGIs be assigned rights?
- How can governance be flexible enough to respond to new issues that were unknown when the governance system was created?
- What international governance trials, tests, or experiments can be constructed to inform the text of an international AGI treaty?
- How can international treaties and a governance system prevent increased centralization of power from crowding out others?
- Where is the most important or insightful work today being conducted on global governance of AGI?
Control
- What enforcement powers will be needed to make an international AGI treaty effective?
- How can the use of AGI by organized crime and terrorism be reduced or prevented? (Please consider new types of crimes and terrorism which might be enabled by AGI.)
- Assuming AGI audits would have to be continuous rather than one-time certifications, how would audit values be addressed?
- What disruptions could complicate the task of enforcing AGI governance?
- How can a governance model correct undesirable actions unanticipated in utility functions?
- How will quantum computing affect AGI control?
- How can international agreements and a governance system prevent an AGI “arms race” from escalating faster than expected, getting out of control, and leading to war, be it kinetic, algorithmic, cyber, or information warfare?
And last, question 22: What additional issues and/or questions need to be addressed to have a positive AGI outcome?
Initial sample of potential governance models for AGI*
- IAEA-like or WTO-like model with enforcement powers. These are the easiest to understand, but likely to be too static to manage AGI.
- IPCC-like model in concert with international treaties. This approach has not led to a governance system for climate change.
- Online real-time global collective intelligence system with audit and licensing status, governance by information power. This would be useful for selecting and using an AGI system, but there is no proof that information power would be sufficient to govern the evolution of AGI.
- GGCC (Global Governance Coordinating Committees) would be flexible and enforced by national sanctions, ad hoc legal rulings in different countries, and insurance premiums. This leaves too many ways for AGI developers to avoid meeting standards.
- UN, ISO, and/or IEEE standards used for auditing and licensing. Licensing would affect purchasing decisions and hence have impact, but it requires an international agreement or treaty ratified by all countries.
- Put different parts of AGI governance under different bodies such as the ITU, WTO, and WIPO. Some of this is likely to happen but would not be sufficient to govern all instances of AGI systems.
- Decentralized Semi-Autonomous TransInstitution. This could be the most effective, but the most difficult to establish since both Decentralized Semi-Autonomous Organizations and TransInstitutions are new concepts.
*Drawn from “Artificial General Intelligence Issues and Opportunities” by Jerome C. Glenn, contracted by the EC for input to Horizon Europe 2025-27 strategic planning.
AGI Experts and Thought Leaders in Phase 1
- Sam Altman, CEO of OpenAI (via YouTube and the OpenAI blog)
- Anonymous, AGI Existential Risk OECD (ret.)
- Yoshua Bengio, AI pioneer, Quebec AI Institute and the University of Montréal
- Irakli Beridze, UN Interregional Crime and Justice Research Institute, Centre for AI and Robotics
- Nick Bostrom, Future of Humanity Institute at Oxford University
- Greg Brockman, OpenAI co-founder
- Vint Cerf, VP and Chief Internet Evangelist, Google
- Shaoqun Chen, CEO of Shenzhen Zhongnong Net Company
- Anonymous, Jing Dong AI Research Institute, China
- Pedro Domingos, University of Washington
- Dan Faggella, Emerj Artificial Intelligence Research
- Lex Fridman, MIT, podcast host
- Bill Gates
- Ben Goertzel, CEO SingularityNET
- Yuval Noah Harari, Hebrew University, Israel
- Tristan Harris, Center for Humane Technology
- Demis Hassabis, CEO and co-founder of DeepMind
- Geoffrey Hinton, AI pioneer, Google (ret.)
- Lambert Hogenhout, Chief Data, Analytics and Emerging Technologies, UN Secretariat
- Eric Horvitz, Chief Scientific Officer, Microsoft
- Anonymous, Information Technology Hundred People Association, China
- Anonymous, China Institute of Contemporary International Relations
- Andrej Karpathy, OpenAI, former Senior Director of AI at Tesla
- David Kelley, AGI Lab
- Daphne Koller, Stanford University, Coursera
- Ray Kurzweil, Director of Engineering (Machine Learning), Google
- Connor Leahy, CEO Conjecture
- Yann LeCun, Professor, New York University; Chief AI Scientist, Meta
- Shane Legg, co-founder of DeepMind
- Fei-Fei Li, Stanford University, Human-Centered AI
- Erwu Liu, Tongji University AI and Blockchain Intelligence Laboratory
- Gary Marcus, NYU professor emeritus
- Dale Moore, US Dept of Defense AI consultant
- Emad Mostaque, CEO of Stability.ai
- Elon Musk
- Gabriel Mukobi, PhD student Stanford University
- Anonymous, National Research University Higher School of Economics
- Judea Pearl, Professor, UCLA
- Sundar Pichai, Google CEO
- Francesca Rossi, President of AAAI, IBM Fellow and IBM’s AI Ethics Global Leader
- Anonymous, Russian Academy of Science
- Stuart Russell, UC Berkeley
- Karl Schroeder, Science Fiction Author
- Bart Selman, Cornell University
- Javier Del Ser, Tecnalia, Spain
- David Shapiro, AGI Alignment Consultant
- Yesha Sivan, Founder and CEO of i8 Ventures
- Ilya Sutskever, OpenAI co-founder
- Jaan Tallinn, Centre for the Study of Existential Risk at Cambridge University, and Future of Life Institute
- Max Tegmark, Future of Life Institute and MIT
- Peter Voss, CEO and Chief Scientist at Aigo.ai
- Paul Werbos, National Science Foundation (ret.)
- Stephen Wolfram, Wolfram Alpha, Wolfram Language
- Yudong Yang, Alibaba’s DAMO Academy
- Eliezer Yudkowsky, Machine Intelligence Research Institute
There are many excellent centers studying values for, and the ethical issues of, ANI, but not potential global governance models for the transition to AGI. The distinctions among ANI, AGI, and ASI are usually missing in these studies; even the most comprehensive and detailed US National Security Commission on Artificial Intelligence report makes little mention of them.[iv] Current work on AI governance is trying to catch up with the artificial narrow intelligence proliferating worldwide today; we also need to jump ahead to anticipate the governance needs of what AGI could become.
Some argue that creating rules for the governance of AGI too soon will stifle its development. Expert judgments vary about when AGI will be possible; however, some of those working to develop AGI believe it could arrive within as little as ten years.[v] Since it is likely to take ten years to 1) negotiate international or global agreements on the transition from ANI to AGI, 2) design the governance system, and 3) begin implementation, it would be wise to begin exploring potential governance approaches and their likely effectiveness now.
The Millennium Project is now seeking sponsors and collaborators for this research. Email Jerome.Glenn@Millennium-Project.org or info@millennium-project.org.
Updates:
- February 2023, Publication, “Artificial General Intelligence Issues and Opportunities” by Jerome C. Glenn, contracted by the EC for input to the foresight for the 2nd Strategic Plan of Horizon Europe (2025-27).
- December 2022, Podcast, “Global governance of the transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI),” with Jerome C. Glenn on the London Futurist podcast.
- December 2022, Launch, first steps of the Artificial General Intelligence Governance Study.
[i] Stephen Hawking says A.I. could be ‘worst event in the history of our civilization.’ https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html
[ii] Elon Musk says all advanced AI development should be regulated, including at Tesla: https://techcrunch.com/2020/02/18/elon-musk-says-all-advanced-ai-development-should-be-regulated-including-at-tesla/
[iii] Microsoft’s Bill Gates insists AI is a threat https://www.bbc.com/news/31047780
[iv] The full final report of US National Security Commission on Artificial Intelligence is available at https://www.nscai.gov
[v] AI Multiple, 995 experts’ opinions: AGI / singularity by 2060 [2021 update], https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/, December 31, 2020